Article
Publication date: 5 July 2023

Haoqiang Yang, Xinliang Li, Deshan Meng, Xueqian Wang and Bin Liang


Abstract

Purpose

The purpose of this paper is to use a model-free reinforcement learning (RL) algorithm to optimize manipulability, overcoming the difficulties of matrix inversion, complicated formula transformation and expensive calculation time.

Design/methodology/approach

Manipulability optimization is an effective way to address the singularity problem that arises in manipulator control. Several control schemes have been proposed to optimize manipulability during trajectory tracking, but they suffer from matrix inversion, complicated formula transformation and expensive calculation time.
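The manipulability the authors optimize is conventionally Yoshikawa's measure, which vanishes at singular configurations. As a minimal sketch only (the paper's manipulator model and Jacobian are not given in this abstract; the planar 3-link arm and unit link lengths below are assumptions for illustration):

```python
import numpy as np

# Hypothetical planar 3-link arm with unit link lengths (illustrative only;
# not the manipulator used in the paper).
LINK_LENGTHS = np.array([1.0, 1.0, 1.0])

def jacobian(theta):
    """Position Jacobian (2x3) of the planar arm's end effector."""
    cum = np.cumsum(theta)                      # absolute link angles
    J = np.zeros((2, len(theta)))
    for j in range(len(theta)):
        # Column j sums the contributions of links j..n-1.
        J[0, j] = -np.sum(LINK_LENGTHS[j:] * np.sin(cum[j:]))
        J[1, j] =  np.sum(LINK_LENGTHS[j:] * np.cos(cum[j:]))
    return J

def manipulability(theta):
    """Yoshikawa manipulability measure w = sqrt(det(J J^T)).

    w -> 0 as the arm approaches a singular configuration, so a
    controller that keeps w large stays away from singularities.
    """
    J = jacobian(theta)
    return np.sqrt(np.linalg.det(J @ J.T))
```

For example, a fully stretched arm (`theta = [0, 0, 0]`) is singular and yields w = 0, while a bent configuration yields w > 0.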

Findings

The redundant manipulator trained by RL can adjust its configuration in real time to optimize manipulability in an inverse-free manner while tracking the desired trajectory. Computer simulations and physical experiments demonstrate that, compared with existing methods, the average manipulability is increased by 58.9% and the calculation time is reduced to 17.9%. The proposed method therefore effectively optimizes manipulability while significantly shortening calculation time.
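The abstract does not specify how the RL agent trades off tracking accuracy against manipulability. A plausible, purely hypothetical reward shaping (the `alpha` weight and the functional form are assumptions, not from the paper) that would encourage this behavior is:

```python
import numpy as np

def reward(tracking_error, w, alpha=0.1):
    """Hypothetical RL reward: penalize tracking error, reward manipulability.

    tracking_error : vector from the end effector to the desired point
    w              : Yoshikawa manipulability measure at the current config
    alpha          : trade-off weight (assumed value, not from the paper)
    """
    return -np.linalg.norm(tracking_error) + alpha * w
```

With such a reward, configurations that track the trajectory closely while keeping the arm far from singularity score highest, which matches the behavior the findings describe.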

Originality/value

To the best of the authors’ knowledge, this is the first method to optimize manipulability using RL during trajectory tracking. The authors compare their approach with existing singularity-avoidance and manipulability-maximization techniques and show that it achieves better optimization performance with less computing time.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 5
Type: Research Article
ISSN: 0143-991X
